
    From Self-Interpreters to Normalization by Evaluation

    We characterize normalization by evaluation as the composition of a self-interpreter with a self-reducer using a special representation scheme, in the sense of Mogensen (1992). We do so by systematically deriving an untyped normalization by evaluation algorithm from a standard interpreter for the λ-calculus. The derived algorithm is not novel; indeed, other published algorithms may be obtained in the same manner through appropriate adaptations of the representation scheme.
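
    The recipe alluded to above is the standard one for untyped NbE. As a point of reference, here is a minimal Haskell sketch of that recipe (the names and details are ours, not the paper's derivation or its Mogensen-style representation scheme): terms are evaluated into a semantic domain in which functions are host-language closures and stuck terms accumulate as neutrals, and a reify pass reads values back into syntax using fresh variables.

```haskell
module NbE where

-- Object-language syntax.
data Term = Var String | Lam String Term | App Term Term
  deriving Show

-- Semantic domain: functions are host closures, stuck terms are neutrals.
data Value   = VLam (Value -> Value) | VNeutral Neutral
data Neutral = NVar String | NApp Neutral Value

type Env = [(String, Value)]

-- Evaluation: the host runtime does all the work for redexes.
eval :: Env -> Term -> Value
eval env (Var x)   = maybe (VNeutral (NVar x)) id (lookup x env)
eval env (Lam x b) = VLam (\v -> eval ((x, v) : env) b)
eval env (App f a) =
  case eval env f of
    VLam g     -> g (eval env a)
    VNeutral n -> VNeutral (NApp n (eval env a))

-- Reification: read a semantic value back into syntax, inventing fresh names.
reify :: Int -> Value -> Term
reify i (VLam f) =
  let x = "x" ++ show i
  in Lam x (reify (i + 1) (f (VNeutral (NVar x))))
reify i (VNeutral n) = reifyNeutral i n

reifyNeutral :: Int -> Neutral -> Term
reifyNeutral _ (NVar x)   = Var x
reifyNeutral i (NApp n v) = App (reifyNeutral i n) (reify i v)

normalize :: Term -> Term
normalize = reify 0 . eval []
```

    For instance, normalize (App (Lam "y" (Var "y")) (Lam "z" (Var "z"))) returns Lam "x0" (Var "x0").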

    Efficient normalization by evaluation

    Dependently typed theorem provers allow arbitrary terms in types. It is convenient to identify large classes of terms during type checking, hence many such systems provide some form of conversion rule. A standard algorithm for testing the convertibility of two types consists in normalizing them and then testing the normal forms for syntactic equality. Normalization by evaluation is a standard technique that enables the use of existing compilers and runtimes for functional languages to implement normalizers, without peeking under the hood, yielding a system that is fast yet cheap in terms of implementation effort. Our focus is on the performance of untyped normalization by evaluation. We demonstrate that, with the aid of a standard optimization for higher-order programs (namely uncurrying) and the reuse of the evaluator's pattern matching facilities for datatypes, we may obtain a normalizer that evaluates non-functional values about as fast as the underlying evaluator but, as an added benefit, can also fully normalize functional values; in other words, it partially evaluates functions efficiently.
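
    To make the reuse of the evaluator's pattern matching concrete, here is a hedged sketch in the style of the previous snippet (the names are again ours, and the stuck-addition neutral form in particular is an invented illustration): saturated constructors are ordinary host data, so case analysis on them is the host runtime's own pattern match and carries no interpretive overhead, while stuck scrutinees fall back to a symbolic neutral form that can later be reified.

```haskell
-- Value domain extended with datatype constructors.
data Neutral = NVar String | NApp Neutral Value | NPlus Neutral Value

data Value
  = VLam (Value -> Value)   -- functions: host-language closures
  | VCon String [Value]     -- saturated constructor applications
  | VNeutral Neutral        -- stuck terms, kept symbolic for reification

-- Addition on unary naturals interpreted over this domain: the match on VCon
-- is an ordinary Haskell pattern match, i.e. the underlying evaluator's own.
vplus :: Value -> Value -> Value
vplus (VCon "Z" [])  m = m
vplus (VCon "S" [n]) m = VCon "S" [vplus n m]
vplus (VNeutral n)   m = VNeutral (NPlus n m)   -- stuck: record the addition symbolically
vplus _              _ = error "vplus applied to a non-numeral"
```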

    Conception d'un noyau de vérification de preuves pour le λΠ-calcul modulo (Design of a proof-checking kernel for the λΠ-calculus modulo)

    In recent years, the emergence of feature-rich and mature interactive proof assistants has enabled large formalization efforts of high-profile conjectures and results previously established only by pen and paper. A medley of incompatible and philosophically diverging logics is at the core of these proof assistants. Cousineau and Dowek (2007) have proposed the λΠ-calculus modulo as a universal target framework for other front-end proof languages and environments. We explain in this thesis how this particularly simple formalism allows for a small, modular and efficient proof checker upon which the consistency of entire systems can be made to rely. Proofs increasingly rely on computation, both in the large, as exemplified by the proof of the four colour theorem by Gonthier (2007), and in the small, following the SSReflect methodology and its supporting tools. Encoding proofs from other systems in the λΠ-calculus modulo bakes yet more computation into the proof terms. We show how to make the proof checking problem manageable by turning entire proof terms into functional programs and compiling them in one go using off-the-shelf compilers for standard programming languages. We use untyped normalization by evaluation (NbE) as an enabling technology and show how to optimize previous instances of it found in the literature. Through a single change to the interpretation of proof terms, we arrive at a representation of proof terms using higher-order abstract syntax (HOAS), allowing for a proof checking algorithm devoid of any explicit typing context for all Pure Type Systems (PTS). We observe that this novel algorithm is a generalization to dependent types of a type checking algorithm found in the HOL family of proof assistants, enabling on-the-fly checking of proofs. We thus arrive at a purely functional system with no explicit state, where all proofs are checked by construction. We formally verify in Coq that the type system on higher-order terms underlying this algorithm corresponds to the standard typing rules for PTS. This line of work can be seen as connecting two historic strands of proof assistants: LCF and its descendants, where proofs of untyped or simply typed formulae are checked by construction, and Automath and its descendants, where proofs of dependently typed terms are checked a posteriori. The algorithms presented in this thesis are at the core of a new proof checker called Dedukti and have in some cases been transferred to the more mature platform that is Coq. In joint work with Dénès, we show how to extend the untyped NbE algorithm to the syntax and reduction rules of the Calculus of Inductive Constructions (CIC). In joint work with Burel, we generalize previous work by Cousineau and Dowek (2007) on the embedding into the λΠ-calculus modulo of a large class of PTS to inductive types, pattern matching and fixpoint operators.
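
    To illustrate the flavour of a context-free, HOAS-based checker of the kind the thesis describes, here is a heavily simplified Haskell sketch (simply typed rather than a full PTS, with names of our own choosing): binders are represented by host-language functions, and every variable carries its own type, so type inference never threads an explicit typing context.

```haskell
-- Simply typed sketch of context-free type checking over HOAS terms.
data Ty = Base String | Arr Ty Ty
  deriving (Eq, Show)

data Tm
  = Var String Ty        -- a variable tagged with its own type
  | Lam Ty (Tm -> Tm)    -- binder as a host-language function (HOAS)
  | App Tm Tm

-- The Int argument only supplies fresh variable names; there is no context.
infer :: Int -> Tm -> Maybe Ty
infer _ (Var _ ty)    = Just ty
infer i (Lam ty body) = do
  -- Instantiate the binder with a fresh variable that already carries its type.
  codTy <- infer (i + 1) (body (Var ("x" ++ show i) ty))
  Just (Arr ty codTy)
infer i (App f a) = do
  Arr dom cod <- infer i f
  argTy       <- infer i a
  if argTy == dom then Just cod else Nothing
```

    For example, infer 0 (Lam (Base "o") (\x -> x)) yields Just (Arr (Base "o") (Base "o")) without any typing context ever being constructed.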

    Linear Haskell: practical linearity in a higher-order polymorphic language

    Linear type systems have a long and storied history, but not a clear path forward to integrate with existing languages such as OCaml or Haskell. In this paper, we study a linear type system designed with two crucial properties in mind: backwards compatibility and code reuse across linear and non-linear users of a library. Only then can the benefits of linear types permeate conventional functional programming. Rather than bifurcate types into linear and non-linear counterparts, we instead attach linearity to function arrows. Linear functions can receive inputs from linearly bound values, but can also operate over unrestricted, regular values. To demonstrate the efficacy of our linear type system, both how easily it can be integrated into an existing language implementation and how streamlined it makes writing programs with linear types, we implemented our type system in GHC, the leading Haskell compiler, and demonstrate two kinds of applications of linear types: mutable data with pure interfaces, and enforcing protocols in I/O-performing functions.
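
    A minimal sketch of the linearity-on-arrows idea, assuming GHC 9.0 or later with the LinearTypes extension (the function names are ours): the multiplicity annotation on the arrow records that a function consumes its argument exactly once, while ordinary code keeps using ordinary arrows.

```haskell
{-# LANGUAGE LinearTypes #-}
module LinearDemo where

-- Accepted: the linearly received pair is consumed exactly once, because each
-- of its components is used exactly once.
swap :: (a, b) %1 -> (b, a)
swap (x, y) = (y, x)

-- Ordinary non-linear code can still apply a linear function to an
-- unrestricted value; this is the backwards compatibility the paper stresses.
useSwap :: (Int, Bool) -> (Bool, Int)
useSwap p = swap p

-- By contrast, a definition such as  dup :: a %1 -> (a, a); dup x = (x, x)
-- would be rejected, since x would be consumed twice.
```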

    Full reduction at full throttle

    Emerging trends in proof styles and new applications of interactive proof assistants exploit the computational facilities of the provided proof language, reaping enormous benefits in proof size and convenience to the user. However, the resulting proof objects really put the proof assistant to the test in terms of the computational time required to check them. We present a novel translation of the terms of the full Calculus of (Co)Inductive Constructions to OCaml programs. Building on this translation, we further present a new fully featured version of Coq that offloads much of the computation required during proof checking to a vanilla, state-of-the-art, finely tuned compiler. This modular scheme yields substantial performance improvements over existing systems at a reduced implementation cost. The work presented here builds on previous work described in [GL02], but we place particular emphasis in this paper on the fact that this scheme is in fact an instance of untyped normalization by evaluation [FR04, Lin05, AHN08, Boe10].
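
    The core move, translating object-language terms into programs that an off-the-shelf compiler can run at native speed, can be illustrated by the following toy Haskell sketch (our own, vastly simpler than the paper's translation, and targeting Haskell rather than OCaml). A full scheme also needs a readback phase, like the reify function in the NbE snippets above, to recover normal forms from the compiled values.

```haskell
-- Pretty-print an object-language term as host-language source text, which a
-- standard compiler can then turn into natively executing code.
data Term = Var String | Lam String Term | App Term Term

compile :: Term -> String
compile (Var x)   = x
compile (Lam x b) = "(\\" ++ x ++ " -> " ++ compile b ++ ")"
compile (App f a) = "(" ++ compile f ++ " " ++ compile a ++ ")"
```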

    Multi-level Contextual Type Theory

    Contextual type theory distinguishes between bound variables and meta-variables to write potentially incomplete terms in the presence of binders. It has found good use as a framework for concisely explaining higher-order unification, for characterizing holes in proofs, and for developing a foundation for programming with higher-order abstract syntax, as embodied by the programming and reasoning environment Beluga. However, to reason about these applications, we need to introduce meta^2-variables to characterize the dependency on meta-variables and bound variables. In other words, we must go beyond a two-level system granting only bound variables and meta-variables. In this paper we generalize contextual type theory to n levels for arbitrary n, so as to obtain a formal system offering bound variables, meta-variables and so on all the way to meta^n-variables. We obtain a uniform account by collapsing all these different kinds of variables into a single notion of variable indexed by some level k. We give a decidable bi-directional type system which characterizes beta-eta-normal forms together with a generalized substitution operation. (In Proceedings LFMTP 2011, arXiv:1110.668)

    A Lambda Term Representation Inspired by Linear Ordered Logic

    We introduce a new nameless representation of lambda terms inspired by ordered logic. At a lambda abstraction, the number and relative positions of all occurrences of the bound variable are stored, and an application carries the additional information of where to cut the variable context into function and argument parts. This way, complete information about free variable occurrences is available at each subterm without requiring a traversal, and environments can be kept exact, so that they only assign values to variables that actually occur in the associated term. Our approach avoids space leaks in interpreters that build function closures. In this article, we prove correctness of the new representation and present an experimental evaluation of its performance in a proof checker for the Edinburgh Logical Framework. Keywords: representation of binders, explicit substitutions, ordered contexts, space leaks, Logical Framework. (In Proceedings LFMTP 2011, arXiv:1110.668)
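
    A very rough Haskell rendering of the information such a representation keeps at each node (our own reading of the description above, not the paper's precise definition):

```haskell
-- Abstractions record where their bound variable occurs; applications record
-- how the free-variable context is split between function and argument, so
-- occurrence information is available everywhere without a traversal.
type Occs  = [Int]   -- relative positions of the bound variable's occurrences
type Split = Int     -- how many context entries belong to the function part

data Term
  = Var                   -- the variable in scope at this leaf
  | Lam Int Occs Term     -- occurrence count and positions of the bound variable
  | App Split Term Term   -- where to cut the context between the two subterms
```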

    Genetic and phenotypic spectrum associated with IFIH1 gain-of-function

    IFIH1 gain-of-function has been reported as a cause of a type I interferonopathy encompassing a spectrum of autoinflammatory phenotypes including Aicardi–Goutières syndrome and Singleton–Merten syndrome. Ascertaining patients through a European and North American collaboration, we set out to describe the molecular, clinical and interferon status of a cohort of individuals with pathogenic heterozygous mutations in IFIH1. We identified 74 individuals from 51 families segregating a total of 27 likely pathogenic mutations in IFIH1. Ten adult individuals, 13.5% of all mutation carriers, were clinically asymptomatic (with seven of these aged over 50 years). All mutations were associated with enhanced type I interferon signaling, including six variants (22%) which were predicted as benign according to multiple in silico pathogenicity programs. The identified mutations cluster close to the ATP binding region of the protein. These data confirm variable expression and nonpenetrance as important characteristics of the IFIH1 genotype, a consistent association with enhanced type I interferon signaling, and a common mutational mechanism involving increased RNA binding affinity or decreased efficiency of ATP hydrolysis and filament disassembly rate.

    CoqInE: Translating the Calculus of Inductive Constructions into the λΠ-calculus Modulo

    We show how to translate the Calculus of Inductive Constructions (CIC) as implemented by Coq into the λΠ-calculus modulo, a proposed common backend proof format for heterogeneous proof assistants.

    The λΠ-calculus Modulo as a Universal Proof Language, in "Second Workshop on Proof Exchange for Theorem Proving (PxTP)", CEUR-WS.org, 2012, vol. 878, Available at: ceur-ws.org/Vol-878/paper2.pdf

    The λΠ-calculus forms one of the vertices in Barendregt’s λ-cube and has been used as the core language for a number of logical frameworks. Following earlier extensions of natural deduction [14], Cousineau and Dowek [11] generalize the definitional equality of this well-studied calculus to an arbitrary congruence generated by rewrite rules, which allows for more faithful encodings of foreign logics. This paper motivates the resulting language, the λΠ-calculus modulo, as a universal proof language, capable of expressing proofs from many other systems without losing their computational properties. We further show how proofs in this language can be checked very simply and efficiently. We have implemented this scheme in a proof checker called Dedukti.
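
    To convey what an arbitrary congruence generated by rewrite rules means operationally, here is a small hedged Haskell sketch of a conversion test modulo user-declared rules (the names are ours, and Dedukti's actual checker uses far more efficient machinery, including the NbE techniques discussed above): two terms are convertible when their weak head normal forms, computed with beta-reduction plus the rewrite rules, agree structurally.

```haskell
data Term = Var String | Lam String Term | App Term Term | Con String
  deriving (Eq, Show)

-- A rewrite rule, represented as a partial top-level reduction step.
type Rule = Term -> Maybe Term

-- One head reduction step: try the user rules first, then beta, then reduce
-- inside the head of an application.
step :: [Rule] -> Term -> Maybe Term
step rules t =
  case [t' | r <- rules, Just t' <- [r t]] of
    (t' : _) -> Just t'
    []       -> case t of
      App (Lam x b) a -> Just (subst x a b)
      App f a         -> (\f' -> App f' a) <$> step rules f
      _               -> Nothing

-- Weak head normal form with respect to beta plus the rewrite rules.
whnf :: [Rule] -> Term -> Term
whnf rules t = maybe t (whnf rules) (step rules t)

-- Naive substitution; adequate only for the closed examples of this sketch.
subst :: String -> Term -> Term -> Term
subst x a (Var y)   = if x == y then a else Var y
subst x a (Lam y b) = if x == y then Lam y b else Lam y (subst x a b)
subst x a (App f u) = App (subst x a f) (subst x a u)
subst _ _ (Con c)   = Con c

-- Convertibility: compare weak head normal forms, recursing structurally.
convertible :: [Rule] -> Term -> Term -> Bool
convertible rules t u =
  case (whnf rules t, whnf rules u) of
    (Lam x b, Lam y c) -> convertible rules b (subst y (Var x) c)
    (App f a, App g b) -> convertible rules f g && convertible rules a b
    (t', u')           -> t' == u'
```

    Each rewrite rule declared in a development would simply be registered as one more entry in the list handed to convertible, so the conversion rule of the type system grows with the user's definitions rather than being fixed once and for all.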